10 research outputs found

    3D Acquisition of Mirroring Objects using Striped Patterns

    Objects with mirroring optical characteristics are left out of the scope of most 3D scanning methods. We present here a new automatic acquisition approach, shape-from-distortion, that focuses on that category of objects, requires only a still camera and a color monitor, and produces range scans (plus a normal and a reflectance map) of the target. Our technique consists of two steps: first, an improved environment matte is captured for the mirroring object, using the interference of patterns with different frequencies to obtain sub-pixel accuracy. Then, the matte is converted into a normal and a depth map by exploiting the self-coherence of a surface when integrating the normal map along different paths. The results show very high accuracy, capturing even the smallest surface details. The acquired depth maps can be further processed using standard techniques to produce a complete 3D mesh of the object.
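    The path-integration idea in the second step can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: normals are converted to surface gradients and integrated along two different paths, and averaging the results exploits the self-coherence of an integrable surface.

    ```python
    import numpy as np

    def normals_to_depth(normals):
        """Integrate a unit-normal map (H, W, 3) into a relative depth map.

        Illustrative sketch: gradients p = -nx/nz, q = -ny/nz are
        integrated along two different paths; for a self-coherent
        (integrable) surface both paths agree, so we average them.
        """
        nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
        p = -nx / nz          # dz/dx
        q = -ny / nz          # dz/dy
        H, W = p.shape

        # Path A: along the first row, then down each column.
        za = np.zeros((H, W))
        za[0, 1:] = np.cumsum(p[0, :-1])
        za[1:, :] = za[0, :] + np.cumsum(q[:-1, :], axis=0)

        # Path B: down the first column, then along each row.
        zb = np.zeros((H, W))
        zb[1:, 0] = np.cumsum(q[:-1, 0])
        zb[:, 1:] = zb[:, :1] + np.cumsum(p[:, :-1], axis=1)

        return 0.5 * (za + zb)  # average the two integration paths
    ```

    On a planar normal map both paths reproduce the plane exactly; on noisy real data the disagreement between paths is what the self-coherence constraint exploits.
    
    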

    Relighting objects from image collections

    We present an approach for recovering the reflectance of a static scene with known geometry from a collection of images taken under distant, unknown illumination. In contrast to previous work, we allow the illumination to vary between the images, which greatly increases the applicability of the approach. Using an all-frequency relighting framework based on wavelets, we are able to simultaneously estimate the per-image incident illumination and the per-surface-point reflectance. The wavelet framework allows for incorporating various reflection models. We demonstrate the quality of our results for synthetic test cases as well as for several datasets captured under laboratory conditions. Combined with multi-view stereo reconstruction, we are even able to recover the geometry and reflectance of a scene solely using images collected from the Internet.
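    The joint estimation of per-image lighting and per-point reflectance can be illustrated with a heavily simplified Lambertian stand-in for the paper's wavelet framework. The alternating-least-squares structure and all names here are assumptions for illustration: each image intensity is modeled as albedo times the dot product of a known per-point light-transport vector with an unknown per-image lighting coefficient vector.

    ```python
    import numpy as np

    def estimate_lighting_and_albedo(I, T, n_iter=100):
        """Alternating least-squares sketch of the joint estimation idea.

        Model (Lambertian simplification): I[k, i] ~ albedo[i] * (T[i] @ L[k]),
        with known per-point transport T (n_points, n_basis) and unknown
        per-image lighting L (n_images, n_basis) and per-point albedo.
        """
        n_img, n_pts = I.shape
        n_basis = T.shape[1]
        albedo = np.ones(n_pts)
        L = np.zeros((n_img, n_basis))
        for _ in range(n_iter):
            # Fix albedo, solve for all lighting vectors jointly.
            A = albedo[:, None] * T                          # (n_pts, n_basis)
            L = np.linalg.lstsq(A, I.T, rcond=None)[0].T     # (n_img, n_basis)
            # Fix lighting, solve a scalar least squares per point.
            S = T @ L.T                                      # shading (n_pts, n_img)
            albedo = (I.T * S).sum(axis=1) / np.maximum((S * S).sum(axis=1), 1e-12)
        return L, albedo
    ```

    Note the inherent global scale ambiguity (albedo up, lighting down); only the product is constrained, which is why reconstruction error, not the individual factors, is the meaningful check.
    
    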

    Intrinsic Textures for Relightable Free-Viewpoint Video

    This paper presents an approach to estimate the intrinsic texture properties (albedo, shading, normal) of scenes from multiple view acquisition under unknown illumination conditions. We introduce the concept of intrinsic textures, which are pixel-resolution surface textures representing the intrinsic appearance parameters of a scene. Unlike previous video relighting methods, the approach does not assume regions of uniform albedo, which makes it applicable to richly textured scenes. We show that intrinsic image methods can be used to refine an initial, low-frequency shading estimate based on a global lighting reconstruction from an original texture and coarse scene geometry in order to resolve the inherent global ambiguity in shading. The method is applied to relighting of free-viewpoint rendering from multiple view video capture. This demonstrates relighting with reproduction of fine surface detail. Quantitative evaluation on synthetic models with textured appearance shows accurate estimation of intrinsic surface reflectance properties. © 2014 Springer International Publishing
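    The refinement step can be caricatured in a few lines. This toy sketch assumes the multiplicative intrinsic model image = albedo x shading, takes a coarse low-frequency shading estimate as given (standing in for the global lighting reconstruction), and uses a crude piecewise-constant albedo prior (simple quantization, standing in for real intrinsic-image priors) to push high-frequency variation back into the refined shading.

    ```python
    import numpy as np

    def intrinsic_decompose(image, shading_init, n_levels=8):
        """Toy refinement of a coarse shading estimate (illustrative only).

        Divides out the coarse shading to get an albedo estimate, snaps
        that albedo to a few flat levels (crude piecewise-constant prior),
        then recomputes the shading from the flattened albedo.
        """
        albedo = image / np.maximum(shading_init, 1e-6)
        lo, hi = albedo.min(), albedo.max()
        bins = np.clip(((albedo - lo) / max(hi - lo, 1e-12) * n_levels).astype(int),
                       0, n_levels - 1)
        centers = np.array([albedo[bins == b].mean() if np.any(bins == b) else 0.0
                            for b in range(n_levels)])
        albedo_flat = centers[bins]
        shading = image / np.maximum(albedo_flat, 1e-6)
        return albedo_flat, shading
    ```

    With well-separated albedo values and a reasonable initial shading, the quantization recovers the flat albedo regions and the shading absorbs the smooth remainder.
    
    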

    A framework for the acquisition, processing, transmission, and interactive display of high quality 3D models on the web

    Available from TIB Hannover: RR 1912(2001-4-002) / FIZ - Fachinformationszentrum Karlsruhe / TIB - Technische Informationsbibliothek (SIGLE, DE, German)

    Enhancement of Visual Realism with BRDF for Patient Specific Bronchoscopy Simulation

    This paper presents a novel method for photorealistic rendering of the bronchial lumen by directly deriving matched shading and texture parameters from video bronchoscope images. 2D/3D registration is used to match video bronchoscope images with a 3D CT scan of the same patient, such that patient-specific modelling and simulation with improved visual realism can be achieved. With the proposed method, shading parameters are recovered by modelling the bidirectional reflectance distribution function (BRDF) of the visible surfaces, exploiting the restricted lighting configurations imposed by the bronchoscope. The derived BRDF is then used to predict the expected shading intensity such that a texture map independent of lighting conditions can be extracted. This allows the generation of new views not captured in the original bronchoscopy video, thus allowing free navigation of the acquired 3D model with enhanced photo-realism.
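    The restricted lighting configuration can be illustrated with a common simplification (the paper fits a richer BRDF): a point light colocated with the camera, so predicted shading is a Lambertian term with inverse-square falloff, and the lighting-independent texture is the observed intensity divided by that prediction. All names here are illustrative.

    ```python
    import numpy as np

    def delight_texture(intensity, points, normals, cam_pos):
        """Extract a lighting-independent texture under the assumption of a
        point light colocated with the camera (bronchoscope-style setup).

        Predicted shading: s = max(n . l, 0) / r^2, with l the unit vector
        from surface point to light and r the distance; the texture is the
        observed intensity divided by this predicted shading.
        """
        to_light = cam_pos - points                        # (N, 3)
        r2 = (to_light ** 2).sum(axis=1)                   # squared distance
        l = to_light / np.sqrt(r2)[:, None]                # unit light direction
        ndotl = np.clip((normals * l).sum(axis=1), 0.0, None)
        shading = ndotl / np.maximum(r2, 1e-12)
        return intensity / np.maximum(shading, 1e-12)
    ```

    Dividing out the predicted shading is what makes the extracted texture reusable for rendering views not present in the original video.
    
    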

    Spatio-temporal Reflectance Sharing for Relightable 3D Video

    In our previous work [21], we have shown that by means of a model-based approach, relightable free-viewpoint videos of human actors can be reconstructed from only a handful of multi-view video streams recorded under calibrated illumination. To achieve this purpose, we employ a marker-free motion capture approach to measure dynamic human scene geometry. Reflectance samples for each surface point are captured by exploiting the fact that, due to the person’s motion, each surface location is, over time, exposed to the acquisition sensors under varying orientations. Although this is the first setup of its kind to measure surface reflectance from footage of arbitrary human performances, our approach may lead to a biased sampling of surface reflectance since each surface point is only seen under a limited number of half-vector directions. We thus propose in this paper a novel algorithm that reduces the bias in BRDF estimates of a single surface point by cleverly taking into account reflectance samples from other surface locations made of similar material. We demonstrate the improvements achieved with this spatio-temporal reflectance sharing approach both visually and quantitatively.
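    The sharing idea reduces to pooling: instead of fitting a BRDF per surface point from its own sparse, possibly biased samples, pool the samples of all points carrying the same material label and fit once per material. The sketch below uses a Lambertian stand-in I = albedo x cos(theta); the data layout and names are illustrative, not the paper's.

    ```python
    import numpy as np

    def shared_brdf_estimates(samples, material_ids):
        """Pool reflectance samples across same-material surface points.

        samples: dict point_id -> list of (cos_theta, intensity) pairs
        material_ids: dict point_id -> material label
        Returns a least-squares albedo per material from the pooled samples.
        """
        pooled = {}
        for pt, obs in samples.items():
            pooled.setdefault(material_ids[pt], []).extend(obs)
        albedo_per_material = {}
        for mat, obs in pooled.items():
            c = np.array([o[0] for o in obs])
            i = np.array([o[1] for o in obs])
            # least-squares fit of i ~ albedo * c over the pooled samples
            albedo_per_material[mat] = float((c * i).sum() / (c * c).sum())
        return albedo_per_material
    ```

    A point seen under only a few half-vector directions then inherits the statistical strength of every other point made of the same material, which is the bias reduction the paper argues for.
    
    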